SFNet: Faster and Accurate Semantic Segmentation via Semantic Flow
Authors
Abstract
In this paper, we focus on exploring effective methods for faster and accurate semantic segmentation. A common practice to improve the performance is to attain high-resolution feature maps with strong semantic representation. Two strategies are widely used, atrous convolutions and feature pyramid fusion, but both are either computationally intensive or ineffective. Inspired by the optical-flow-based motion alignment between adjacent video frames, we propose a Flow Alignment Module (FAM) to learn the semantic flow between feature maps of adjacent levels and to broadcast high-level features to high-resolution features effectively and efficiently. Furthermore, integrating our FAM into a standard feature pyramid structure exhibits superior performance over other real-time methods, even with lightweight backbone networks such as ResNet-18 and DFNet. To further speed up the inference procedure, we also present a novel Gated Dual Flow Alignment Module to directly align high-resolution and low-resolution feature maps; we term the improved network SFNet-Lite. Extensive experiments are conducted on several challenging datasets, and the results show the effectiveness of both SFNet and SFNet-Lite. In particular, on the Cityscapes test set, the SFNet-Lite series achieves 80.1 mIoU while running at 60 FPS and 78.8 mIoU while running at 120 FPS with an STDC backbone on an RTX-3090. Moreover, we unify four driving datasets (i.e., Cityscapes, Mapillary, IDD, and BDD) into one large dataset, which we name the Unified Driving Segmentation (UDS) dataset; it contains diverse domain and style information. We benchmark several representative works on UDS. Both SFNet and SFNet-Lite still achieve the best speed and accuracy trade-off on UDS, which serves as a strong baseline in this challenging setting. The code and models are publicly available at https://github.com/lxtGH/SFSegNets .
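The core operation behind the Flow Alignment Module is warping upsampled high-level features by a learned per-pixel flow field before fusing them with high-resolution features. As a rough illustration only (the paper's FAM is a learned neural module; the function name and shapes here are assumptions), a minimal NumPy sketch of bilinear warping by a flow field:

```python
import numpy as np

def warp_by_flow(feat, flow):
    """Bilinearly warp a feature map by a per-pixel flow field.

    feat: (H, W, C) feature map, e.g. upsampled high-level features
    flow: (H, W, 2) per-pixel (x, y) offsets, as a learned semantic
          flow field would supply
    """
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Sampling positions displaced by the flow, clamped to the borders
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners and fractional weights for bilinear interpolation
    x0 = np.floor(sx).astype(int); x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.clip(y0 + 1, 0, H - 1)
    wx = (sx - x0)[..., None]; wy = (sy - y0)[..., None]
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A zero flow field leaves the features unchanged, while a nonzero field resamples each position from its displaced location, which is how a learned flow can correct the misalignment introduced by naive upsampling.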
Similar resources
Accurate Semantic Annotations via Pattern Matching
This paper addresses the problem of performing accurate semantic annotations in a large corpus. The task of creating a sense tagged corpus is different from the word sense disambiguation problem in that the semantic annotations have to be highly accurate, even if the price to be paid is lower coverage. While the state-of-the-art in word sense disambiguation does not exceed 70% precision, we wan...
Cascaded Scene Flow Prediction using Semantic Segmentation
Given two consecutive frames from a pair of stereo cameras, 3D scene flow methods simultaneously estimate the 3D geometry and motion of the observed scene. Many existing approaches use superpixels for regularization, but may predict inconsistent shapes and motions inside rigidly moving objects. We instead assume that scenes consist of foreground objects rigidly moving in front of a static backg...
Fast and Accurate Semantic Mapping through Geometric-based Incremental Segmentation
We propose an efficient and scalable method for incrementally building a dense, semantically annotated 3D map in real-time. The proposed method assigns class probabilities to each region, not each element (e.g., surfel and voxel), of the 3D map which is built up through a robust SLAM framework and incrementally segmented with a geometric-based segmentation method. Differently from all other app...
Semantic Instance Segmentation via Deep Metric Learning
We propose a new method for semantic instance segmentation, by first computing how likely two pixels are to belong to the same object, and then by grouping similar pixels together. Our similarity metric is based on a deep, fully convolutional embedding model. Our grouping method is based on selecting all points that are sufficiently similar to a set of “seed points”, chosen from a deep, fully c...
Joint Optical Flow and Temporally Consistent Semantic Segmentation
The importance and demands of visual scene understanding have been steadily increasing along with the active development of autonomous systems. Consequently, there has been a large amount of research dedicated to semantic segmentation and dense motion estimation. In this paper, we propose a method for jointly estimating optical flow and temporally consistent semantic segmentation, which closely...
Journal
Journal title: International Journal of Computer Vision
Year: 2023
ISSN: 0920-5691, 1573-1405
DOI: https://doi.org/10.1007/s11263-023-01875-x